Greg Detre
Monday, October 23, 2000
Carmen Houston-Price
1. Evaluate the conclusion that "negative evidence" does not play a central role in language acquisition.
2. Why should we believe that language acquisition consists in learning a system of linguistic rules?
Why should we believe that language acquisition consists in learning a system of linguistic rules?
He's looking at what linguistics in the past has to tell us. He starts in the 17th century, a century of genius, especially from the perspective of the Cartesian and Newtonian revolutions. He notes the interesting parallel between mind and gravity as irreducible effects on our otherwise mechanistic corporeal world.
He then points to two contrasting ways of looking at language:
"Philosophical grammar", which introduced the idea of a transformational grammar to generate the "deep" structure from the "surface" (it has recently been much misrepresented in contemporary accounts). Descartes pointed to the advanced use of language as a sign of the intelligence and possession of soul/mind that separates man from beast. This "creative" use of language can be defined in terms of:
our freedom/infinite variety of use of language
appropriateness
The structural linguistics of the last century may have fitted in with behaviourist concepts, but it grossly oversimplified and misunderstood language in so doing.
The time has come to take what has been learnt recently and see it in the light of the more powerful paradigm of the "philosophical grammar".
Language has two special features:
as Saussure noticed, its symbols are arbitrary: there is nothing connecting the word "dog", for instance, with dogs, other than our language community's rote memorisation of the term
It makes "infinite use of finite means". This is because language is a "discrete combinatorial system".
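The unbounded yield of a discrete combinatorial system is easy to demonstrate. A minimal sketch (the toy grammar and vocabulary below are invented for illustration, not drawn from the essay):

```python
import itertools

# A toy "discrete combinatorial system": a handful of words plus
# recursive rewrite rules generate an unbounded set of distinct
# sentences. (Illustrative only; the grammar is made up.)
NOUNS = ["the dog", "the cat"]
VERBS = ["chased", "saw"]

def sentences(depth):
    """Yield sentences of the form N V N, extended by up to `depth`
    levels of trailing "that V N" relative clauses."""
    if depth == 0:
        for n1, v, n2 in itertools.product(NOUNS, VERBS, NOUNS):
            yield f"{n1} {v} {n2}"
    else:
        for s in sentences(depth - 1):
            for v, n in itertools.product(VERBS, NOUNS):
                yield f"{s} that {v} {n}"

# 2 nouns x 2 verbs x 2 nouns = 8 sentences at depth 0; each extra
# level of embedding multiplies the count by 4, with no upper bound.
print(len(list(sentences(0))))   # 8
print(len(list(sentences(1))))   # 32
```

Four words and two rules already give a count that grows without limit in the depth of embedding, which is the point of "infinite use of finite means".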
Language acquisition is not
only inherently interesting; studying it is one way to look for concrete
answers to questions that permeate cognitive science:
Modularity.
Do children learn language using a "mental organ," some of whose
principles of organization are not shared with other cognitive systems such as
perception, motor control, and reasoning (Chomsky, 1975, 1991; Fodor, 1983)? Or
is language acquisition just another problem to be solved by general intelligence,
in this case, the problem of how to communicate with other humans over the
auditory channel (Putnam, 1971; Bates, 1989)?
Human Uniqueness. A related question is whether language is unique to humans. At
first glance the answer seems obvious. Other animals communicate with a fixed
repertoire of symbols, or with analogue variation like the mercury in a
thermometer. But none appears to have the combinatorial rule system of human
language, in which symbols are permuted into an unlimited set of combinations,
each with a determinate meaning. On the other hand, many other claims about
human uniqueness, such as that humans were the only animals to use tools or to
fabricate them, have turned out to be false. Some researchers have thought that
apes have the capacity for language but never profited from a humanlike
cultural milieu in which language was taught, and they have thus tried to teach
apes language-like systems. Whether they have succeeded, and whether human
children are really "taught" language themselves, are questions we
will soon come to.
Language and Thought. Is language simply grafted on top of cognition as a way of
sticking communicable labels onto thoughts (Fodor, 1975; Piaget, 1926)? Or does
learning a language somehow mean learning to think in that language? A famous
hypothesis, outlined by Benjamin Whorf (1956), asserts that the categories and
relations that we use to understand the world come from our particular
language, so that speakers of different languages conceptualize the world in
different ways. Language acquisition, then, would be learning to think, not
just learning to talk.
Learning and Innateness. All humans talk but no house pets or house plants do, no matter
how pampered, so heredity must be involved in language. But a child growing up
in Japan speaks Japanese whereas the same child brought up in California would
speak English, so the environment is also crucial. Thus there is no question
about whether heredity or environment is involved in language, or even whether
one or the other is "more important." Instead, language acquisition
might be our best hope of finding out how heredity and environment interact. We
know that adult language is intricately complex, and we know that children
become adults. Therefore something in the child's mind must be capable of
attaining that complexity. Any theory that posits too little innate structure,
so that its hypothetical child ends up speaking something less than a real
language, must be false. The same is true for any theory that posits too much innate
structure, so that the hypothetical child can acquire English but not, say,
Bantu or Vietnamese.
The scientific study of language acquisition began around the same time
as the birth of cognitive science, in the late 1950's. We can see now why that
is not a coincidence. The historical catalyst was Noam Chomsky's review of
Skinner's Verbal Behavior (Chomsky, 1959). At that time, Anglo-American natural
science, social science, and philosophy had come to a virtual consensus about
the answers to the questions listed above. The mind consisted of sensorimotor
abilities plus a few simple laws of learning governing gradual changes in an
organism's behavioral repertoire. Therefore language must be learned, it cannot
be a module, and thinking must be a form of verbal behavior, since verbal
behavior is the prime manifestation of "thought" that can be observed
externally. Chomsky argued that language acquisition falsified these beliefs in
a single stroke: children learn languages that are governed by highly subtle
and abstract principles, and they do so without explicit instruction or any
other environmental clues to the nature of such principles. Hence language
acquisition depends on an innate, species-specific module that is distinct from
general intelligence. Much of the debate in language acquisition has attempted
to test this once-revolutionary, and still controversial, collection of ideas.
The implications extend to the rest of human cognition.
Though artificial chimp signaling systems have some analogies to human
language (e.g., use in communication, combinations of more basic signals), it
seems unlikely that they are homologous. Chimpanzees require massive regimented
teaching sequences contrived by humans to acquire quite rudimentary abilities,
mostly limited to a small number of signs, strung together in repetitive,
quasi-random sequences, used with the intent of requesting food or tickling
(Terrace, Petitto, Sanders, & Bever, 1979; Seidenberg & Petitto, 1979,
1987; Seidenberg, 1986; Wallman, 1992; Pinker, 1994a). This contrasts sharply
with human children, who pick up thousands of words spontaneously, combine them
in structured sequences where every word has a determinate role, respect the
word order of the adult language, and use sentences for a variety of purposes
such as commenting on interesting objects.
This lack of homology does not, by the way, cast doubt on a
gradualistic Darwinian account of language evolution. Humans did not evolve
directly from chimpanzees. Both derived from a common ancestor, probably around
6-7 million years ago. This leaves about 300,000 generations in which language
could have evolved gradually in the lineage leading to humans, after it split
off from the lineage leading to chimpanzees. Presumably language evolved in the
human lineage for two reasons: our ancestors developed technology and knowledge
of the local environment in their lifetimes, and were involved in extensive
reciprocal cooperation. This allowed them to benefit by sharing hard-won
knowledge with their kin and exchanging it with their neighbors (Pinker &
Bloom, 1990).
Do we need to posit a special, species-specific Language Acquisition Device to explain our ability to learn language so rapidly, or can the ability be explained in terms of more general cognitive processes, i.e. by treating language as just another set of symbols to be shuffled about?
Marcus et al's studies of over-regularisation (applying the regular past-tense rule, as in "walk-walked", to irregular verbs such as "bring", yielding "bringed") indicate a characteristic sequence in children's development: they initially learn the correct irregular form, then learn the regularisation rule, and then apply it too widely.
How else but by learning a system of linguistic rules could we learn language? Our construction of an endless variety of novel sentences in daily conversation is evidence that some sort of generative system of rules must be instantiated in our brains, since no other mechanism could account for language's productivity.
U-shaped learning: infants first learn all verbs by rote, as a series of exceptions. Only later do they abstract the rules governing the regular verbs and the morphology governing their inflection, e.g. tense and conjugation. Indeed, they over-regularise, understandably but mistakenly extending these rules to irregular verbs that they had previously used correctly. Only later still do they arrive at the adult pattern, in which language consists of general rules with many exceptions.
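The three stages of the U-shaped curve can be caricatured in code. A sketch only, under loud assumptions: the discrete stages, stage names and verb list are invented for illustration, whereas real development is gradual and probabilistic:

```python
# Toy illustration of U-shaped development for the English past tense.
# Stages and verb list are invented for this sketch.
IRREGULARS = {"bring": "brought", "go": "went", "run": "ran"}

def past_tense(verb, stage):
    if stage == "rote":
        # Stage 1: only memorised forms are produced correctly.
        return IRREGULARS.get(verb)            # unknown verbs: no output
    if stage == "overregularise":
        # Stage 2: the "-ed" rule is applied across the board,
        # overriding forms that were previously correct by rote.
        return verb + "ed"                     # "bringed", "goed"
    # Stage 3 ("adult"): the rule plus memorised exceptions.
    return IRREGULARS.get(verb, verb + "ed")

print(past_tense("bring", "rote"))            # brought (correct, by rote)
print(past_tense("bring", "overregularise"))  # bringed (error)
print(past_tense("bring", "adult"))           # brought (correct again)
```

The dip in accuracy at stage 2, despite the same verb having been produced correctly at stage 1, is what gives the learning curve its "U" shape.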
what alternatives to a system of rules are there?
language as a huge finite state machine: we learn all the possible sentences in the world and then select the appropriate one. Ridiculous because of the combinatorial explosion.
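A quick back-of-envelope calculation shows why listing sentences is hopeless (the vocabulary size and sentence length below are arbitrary illustrative figures, and deliberately conservative):

```python
# Combinatorial explosion: even a small vocabulary rules out
# memorising a list of all possible sentences. With (say) 1,000 words
# and sentences of up to 20 words, the space of candidate word strings
# is already astronomically larger than any feasible list.
vocab = 1_000
max_len = 20
candidates = sum(vocab ** n for n in range(1, max_len + 1))
print(f"{candidates:.2e}")   # on the order of 1e60 candidate strings
```

Only a tiny fraction of these strings are grammatical, but a pure sentence-list learner has no way to carve out that fraction without rules.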
word chains: slightly more abstract; possible phrases are represented as trees of categories, say, with sentences comprised of chains of such phrases. Again not possible, because of the totally modular way in which we combine phrases, e.g. recursively and with long-distance dependencies; the word chains would become impossibly long.
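The long-distance-dependency problem can be made concrete. In the sketch below (vocabulary and chain invented for illustration), a word-chain generator picks each word only from the previous word, so nothing forces an "if" to be paired with a later "then"; a recursive rule guarantees the pairing at any depth:

```python
# Word-chain (finite-state) generator: each word is licensed only by
# its immediate predecessor. Toy vocabulary, invented for this sketch.
CHAIN = {
    "<s>": ["if", "it"],
    "if": ["it"],
    "it": ["rains"],
    "rains": ["then", "</s>"],
    "then": ["it"],
}

def word_chain(picks):
    """Follow the chain, taking the i-th licensed successor each step."""
    out, word = [], "<s>"
    for i in picks:
        word = CHAIN[word][i]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

# The chain happily emits a sentence with a dangling "if":
print(word_chain([0, 0, 0, 1]))   # if it rains

# A recursive rule S -> "it rains" | "if" S "then" S, by contrast,
# pairs every "if" with a "then" however deeply sentences are embedded.
def sentence(depth):
    if depth == 0:
        return "it rains"
    return f"if {sentence(depth - 1)} then {sentence(depth - 1)}"

s = sentence(2)
print(s.count("if") == s.count("then"))   # True
```

To mimic the recursive rule, a word chain would need a separate state for every depth of pending "if"s, which is exactly the "impossibly long" blow-up noted above.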
So we are left with a system of rules that somehow allows us to recognise and generate legitimate uses of language.
Plunkett talks about two different types of rule/device: a language acquisition device (LAD) and a language generation device (???)
Rolls: language = "symbols bound by a syntax"
Languages always seem to form at approximately the same level of complexity, with exceptions and complications, some unnecessary, others adding to the richness and expressive power of the language; this constant level of complexity hints at an innate human neural process at work.